What Is Artificial General Intelligence? Clarifying The Goal For Engineering And Evaluation

Author

  • Mark R. Waser
Abstract

Artificial general intelligence (AGI) has no consensus definition, yet everyone believes that they will recognize it when it appears. Unfortunately, in reality, there is great debate over specific examples, which run the gamut from exact human brain simulations to infinitely capable systems. Indeed, it has even been argued whether specific instances of humanity are truly generally intelligent. The lack of a consensus definition seriously hampers effective discussion, design, development, and evaluation of generally intelligent systems. We address this by proposing a goal for AGI, rigorously defining one specific class of general intelligence architecture that fulfills this goal and that a number of the currently active AGI projects appear to be converging towards, and presenting a simplified view intended to promote new research and facilitate the creation of a safe artificial general intelligence.

Classifying Artificial Intelligence

Defining and redefining "Artificial Intelligence" (AI) has become a perennial academic exercise, so it should not be surprising that "Artificial General Intelligence" is now undergoing exactly the same fate. Pei Wang addressed this problem (Wang 2008) by dividing the definitions of AI into five broad classes based upon how a given artificial intelligence would be similar to human intelligence: in structure, in behavior, in capability, in function, or in principle. Wang states that:

"These working definitions of AI are all valid, in the sense that each of them corresponds to a description of the human intelligence at a certain level of abstraction, and sets a precise research goal, which is achievable to various extents. Each of them is also fruitful, in the sense that it has guided the research to produce results with intellectual and practical values."
"On the other hand, these working definitions are different, since they set different goals, require different methods, produce different results, and evaluate progress according to different criteria."

Copyright © 2008, The Second Conference on Artificial General Intelligence (agi-09.org). All rights reserved.

We contend that replacing the fourth level of abstraction (Functional-AI) with "similarity of architecture of mind (as opposed to brain)" and altering its boundary with the fifth would greatly improve the accuracy and usability of this scheme for AGI. Stan Franklin proposed (Franklin 2007) that his LIDA architecture was "ideally suited to provide a working ontology that would allow for the discussion, design, and comparison of AGI systems" since it implemented and fleshed out a number of psychological and neuroscience theories of cognition. The feasibility of this claim was quickly demonstrated when Franklin and the principals involved in NARS (Wang 2006), Novamente (Looks, Goertzel and Pennachin 2004), and Cognitive Constructor (Samsonovitch et al. 2008) put together a comparative treatment of their four systems based upon that architecture (Franklin et al. 2007); we would place all of those systems in the new category. Making these changes leaves three classes based upon different levels of architecture, with Structure-AI equating to brain architecture and Principle-AI equating to the architecture of problem-solving, and two classes based upon emergent properties, behavior and capability. It must be noted, however, that both of Wang's examples of the behavioral category have moved toward a more architectural approach: Wang notes the migration of Soar (Lehman, Laird and Rosenbloom 2006; Laird 2008), and the symbolic system ACT-R (Anderson and Lebiere 1998; Anderson et al. 2004) has recently been combined with the connectionist Leabra (O'Reilly and Munakata 2000) to produce SAL (Lebiere et al. 2008), the Synthesis of ACT-R and Leabra.
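The revised classification scheme can be sketched as a simple lookup structure. This is purely an illustrative aid, not something from the paper: the category levels and system assignments follow the discussion above, while the data structure, names, and function are hypothetical.

```python
# Hypothetical sketch of the revised taxonomy discussed above:
# three classes based on levels of architecture, two based on
# emergent properties. Assignments follow the text.

revised_taxonomy = {
    # Three architectural levels
    "Structure-AI": {"level": "brain architecture", "examples": []},
    "Mind-architecture-AI": {
        "level": "architecture of mind (as opposed to brain)",
        "examples": ["LIDA", "NARS", "Novamente", "Cognitive Constructor"],
    },
    "Principle-AI": {
        "level": "architecture of problem-solving",
        "examples": ["Cyc"],  # arguably, per the text
    },
    # Two emergent-property classes
    "Behavior-AI": {
        "level": "emergent behavior",
        # both examples have since migrated toward architecture
        "examples": ["Soar", "ACT-R/SAL"],
    },
    "Capability-AI": {"level": "emergent capability", "examples": []},
}


def classify(system: str) -> str:
    """Return the class whose example list contains `system`, if any."""
    for name, info in revised_taxonomy.items():
        if system in info["examples"]:
            return name
    return "unclassified"


print(classify("NARS"))  # -> Mind-architecture-AI
```

One design point the sketch makes visible: moving a system (say, Soar) between classes changes only data, not the classification machinery, which mirrors the paper's claim that the disputes are about where systems sit, not about the levels themselves.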
Further, the capability category contains only examples of "Narrow AI" and Cyc (Lenat 1995), which arguably belongs to the Principle-AI category. Viewed this way, we must vehemently dispute Wang's contentions that "these five trails lead to different summits, rather than to the same one" and that "to mix them together in one project is not a good idea." To accept these arguments is analogous to resigning ourselves to being blind men who attempt to engineer an example of elephantness by focusing solely on a single view of it, to the exclusion of all other views and to the extent of throwing out valuable information. While we certainly agree with the observations that "Many current AI projects have no clearly specified research goal, and people working on them often swing between different definitions of intelligence" and that this "causes inconsistency in the criteria of design and evaluation", we believe that the solution is to maintain a single goal-oriented focus on one particular definition while drawing clues and inspiration from all of the others.

What Is The Goal of AGI?

Thus far, we have classified intelligence, and thus the goals of AI, by three different levels of abstraction of architecture (i.e. what it is), by how it behaves, and by what it can do. Remarkably, what we have not chosen as a goal is what we want it to do. AGI researchers should examine their own reasons for creating AGI, both in terms of their own goals in creating it and the goals that they intend to pass on and have the AGI implement. Determining and codifying these goals would enable us to finally know the direction in which we are headed. It has been our observation that, at the most abstract level, there are two primary views of the potential goals of an AGI, one positive and one negative. The positive view generally regards intelligence as a universal problem-solver and expects an AGI to contribute to solving the problems of the world.
The negative view sees the power of intelligence and fears that humanity will be one of the problems that is solved. More than anything else, we need an AGI that will not be inimical to human beings or our chosen way of life. Eliezer Yudkowsky claims (Yudkowsky 2004) that the only way to sufficiently mitigate the risk to humanity is to ensure that machines always have an explicit and inalterable top-level goal of fulfilling the "perfected" goals of humanity, his Coherent Extrapolated Volition or CEV. We believe, however, that humanity is so endlessly diverse that we will never find a coherent, non-conflicting set of ordered goals. On the other hand, the existence of functioning human societies makes it clear that we should be able to find some common ground with which we can all co-exist. We contend that it is the overly abstract Principle-AI view of intelligence as "just" a problem-solver that is the true source of risk, and that re-introducing more similarity with humans can cleanly avoid it. For example, Frans de Waal, the noted primatologist, points out (de Waal 2006) that any zoologist would classify humans as obligatorily gregarious, since we "come from a long lineage of hierarchical animals for which life in groups is not an option but a survival strategy". If we therefore extended the definition of intelligence to "the ability and desire to live and work together in an inclusive community to solve problems and improve life for all", there would be no existential risk to humans or anyone else. We have previously argued (Waser 2008) that acting ethically is an attractor in the state space of intelligent behavior for goal-driven systems, that humans are basically moral, and that deviations from ethical behavior on the part of humans are merely the result of shortcomings in our own foresight and intelligence. As pointed out by James Q.
Wilson (Wilson 1993), the real questions about human behavior are not why we are so bad but "how and why most of us, most of the time, restrain our basic appetites for food, status, and sex within legal limits, and expect others to do the same." Of course, extending the definition of intelligence in this way should also shape the stated goal for AGI that we promote. The goal of AGI cannot ethically be to produce slaves to solve the problems of the world; it must be to create companions with differing capabilities and desires who will journey with us to create a better world.

Ethics, Language, and Mind

The first advantage of this new goal is that the study of human ethical motivations and ethical behavior rapidly leads us into very rich territory regarding the details of the architecture of mind required for such motivations and behaviors. As mentioned repeatedly by Noam Chomsky, and first detailed in depth by John Rawls (Rawls 1971), the study of morality is highly analogous to the study of language: we have an innate moral faculty with operative principles that cannot be expressed, in much the same way that we have an innate language faculty with the same attributes. Chomsky transformed the study of language and mind by claiming (Chomsky 1986) that human beings are endowed with an innate program for language acquisition and by developing a series of questions and fundamental distinctions. Chomsky and the community of linguists working within this framework have provided us with an exceptionally clear and compelling model of how such a cognitive faculty can be studied. As pointed out by Marc Hauser (Hauser 2006; Hauser, Young and Cushman 2008), both language and morality are cognitive systems that can be characterized in terms of principles or rules that can construct or generate an unlimited number and variety of representations.
Both can be viewed as configurable by parameters that alter the behavior of the system without altering the system itself, and a theory of moral cognition would greatly benefit from drawing on parts of the terminology and theoretical apparatus of Chomsky's Universal Grammar. Particularly relevant for the development of AGI is their view that language is quite likely a mind-internal computational system that evolved for internal thought and planning and was only later co-opted for communication. Steven Pinker argues (Pinker 2007) that studying cross-cultural constants in language can provide insight into both our internal representation system and when we switch from one model to another. Hauser's studies showing that language dramatically affects our moral perceptions suggest that both use the same underlying computational system, and that studying cross-cultural moral constants could answer not only what is moral but how we think and possibly even why we talk. Finally, the facts that both seem to be genetically endowed but socially conditioned, and that we can watch the formation and growth of each, mean that both can provide windows for observing autogeny in action.
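The "principles and parameters" idea, that a fixed system can yield different judgments under different parameter settings without the system itself changing, can be made concrete with a toy sketch. Nothing here comes from the paper: the single rule, the `intent_matters` parameter, and the scenarios are all hypothetical, chosen only to illustrate the mechanism.

```python
# Toy illustration of principles and parameters: one fixed "principle"
# (harm is impermissible) whose application varies with a parameter
# setting, while the rule system itself stays unchanged.

def judge(action_harms: bool, harm_is_intended: bool, *, intent_matters: bool) -> str:
    """Judge an action under the fixed harm principle.

    `intent_matters` is the (hypothetical) parameter: when True,
    unintended side-effect harm is excused; when False, any harm
    is impermissible.
    """
    if not action_harms:
        return "permissible"
    if intent_matters and not harm_is_intended:
        return "permissible"  # side-effect harm excused under this setting
    return "impermissible"


# The same system, two parameter settings, two different judgments:
print(judge(True, False, intent_matters=True))   # -> permissible
print(judge(True, False, intent_matters=False))  # -> impermissible
```

The point of the sketch is the one the text makes: cross-cultural variation in judgments could, on this view, be modeled as different parameter settings over a shared, innate computational core.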


Publication date: 2009